The proposed Face Melody system analyzes the user’s facial expressions in real time. Through the user’s webcam, the system captures emotional cues, such as joy, sadness, or excitement, and processes this data to understand the user’s current mood.
Face Melody leverages a sophisticated recommendation engine that correlates the detected mood with a vast music database, ensuring that the music selection aligns with the user’s emotional context.
The recommendation engine considers a variety of factors, including tempo, genre, lyrics, and historical preferences, to provide a personalized and emotionally resonant playlist. The system also adapts in real time, allowing users to change their music selection as their mood evolves. Whether a user is looking to lift their spirits, relax, or reflect, Face Melody is designed to cater to these emotional shifts.
I. INTRODUCTION
Music has been a significant part of human culture and expression for centuries. It holds the power to evoke emotions, create memories, and provide solace.
With the digital revolution, music consumption has become more personalized than ever before, thanks to the advent of music streaming platforms and recommendation systems.
However, these systems are often limited to algorithms that analyze user history, preferences, and other contextual data, neglecting a crucial element of human experience: emotions.
Emotions play a pivotal role in how music affects us. A song that resonates with one person during a moment of happiness may not have the same impact during a moment of sadness. This interplay between music and emotions has been the subject of numerous studies and has given rise to the field of music psychology. It has long been recognized that music can either enhance or alter one's emotional state.
Face Melody transforms the music listening experience, enhancing user engagement and emotional well-being. This project represents a harmonious blend of technology and human emotion to create a more personalized and emotionally resonant music experience.
II. LITERATURE SURVEY
A. Dharmendra Roy, Anjali CH, G. Kavya Sri, B. Tharun, K. Venu Gopal
Dharmendra Roy, Anjali CH, G. Kavya Sri, B. Tharun, and K. Venu Gopal proposed a system that extracts initial or raw data from faces and reduces it to a set of classes using methods like principal component analysis (PCA) and Fisher's linear discriminant method (LDA).
The system uses facial recognition technology to detect the user's emotions, which can be more accurate than relying on self-reported mood.
B. Mrs. P. P. Kambare, Dr. S. T. Patil, DY Patil
Mrs. P. P. Kambare, Dr. S. T. Patil, and D. Y. Patil proposed a system that captures facial expressions through a webcam and analyzes them using a convolutional neural network (CNN) to recognize emotions, then maps the detected emotion to a set of songs that match the user's mood, based on sentiment analysis of song lyrics and metadata. The system can adapt to users' listening patterns and preferences over time, providing fresh and relevant recommendations.
C. Ankita Mahadik, Shambhavi Milgir, Prof. Vaishali Kavathekar
Ankita Mahadik, Shambhavi Milgir, and Prof. Vaishali Kavathekar proposed a mood-based music player that performs real-time mood detection and suggests songs according to the detected mood. This becomes an additional feature to the traditional music player apps that come pre-installed on our mobile phones. Neural networks and machine learning have been used for these tasks and have obtained good results.
D. Magnus Mortensen, Cathal Gurrin, Dag Johansen
Magnus Mortensen, Cathal Gurrin, and Dag Johansen proposed a novel music recommendation system that incorporates both collaborative filtering and mood-based recommendations.
The mood-based recommendations were positively evaluated on a closed set of retrospectively gathered user listening data, with recommendations based on the users' playback history.
III. PROPOSED WORK
Personalization: Facial expression-based systems can provide highly personalized music recommendations by taking into account the user's current emotional state. This can lead to a more engaging and enjoyable music-listening experience.
Emotion Detection: They can detect and respond to a wide range of emotions, including happiness, sadness, excitement, and relaxation, enabling a more comprehensive emotional connection with the user.
Mood Regulation: Such systems can help users regulate their emotions by recommending music that matches their desired mood, which can be beneficial for mental health and well-being.
A. Design of the System
The emotion recognition model is trained on the FER-2013 dataset and can detect seven emotions. The project works by getting a live video feed from a webcam and passing it through the model to obtain an emotion prediction. According to the predicted emotion, the app then fetches a playlist of songs from Spotify through a Spotify API wrapper and recommends the songs by displaying them on the screen.
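A condensed sketch of this flow is shown below, assuming a pre-trained Keras model and the Spotipy client library as the Spotify wrapper; the model filename, credential setup, and emotion-to-query mapping are illustrative assumptions, not the project's exact code:

```python
import cv2
import numpy as np
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
from tensorflow.keras.models import load_model

# Assumed artifacts: the trained FER-2013 model file and the seven emotion labels.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
model = load_model("emotion_model.h5")  # hypothetical weights file

# SpotifyClientCredentials reads the SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET
# environment variables by default.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

def recommend_for_frame(frame):
    """Predict the emotion in a single webcam frame and fetch matching playlists."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # FER-2013 models expect 48x48 grayscale input; face detection and ROI
    # cropping are added in Section III-C.
    face = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    pred = model.predict(face.reshape(1, 48, 48, 1))
    emotion = EMOTIONS[int(np.argmax(pred))]
    # Map the detected emotion to a playlist search query (illustrative mapping).
    results = sp.search(q=f"{emotion} mood", type="playlist", limit=5)
    return emotion, [p["name"] for p in results["playlists"]["items"] if p]
```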
B. Algorithms and Techniques used
Convolutional Neural Networks (CNNs): CNNs are a type of deep learning model that is particularly effective for image recognition tasks.
They consist of multiple layers of convolutional and pooling operations, followed by fully connected layers for classification. A CNN model is used here for facial emotion recognition: the architecture consists of several convolutional and pooling layers, followed by fully connected layers.
The model is loaded with pre-trained weights from a file. It takes an input image of a face and predicts the emotion associated with it; by training a CNN on a dataset of labeled facial images, we can develop a powerful tool for emotion recognition.
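A minimal sketch of such an architecture in Keras, assuming FER-2013's 48x48 grayscale inputs and seven output classes; the exact layer sizes and the weights filename are assumptions, not the project's exact configuration:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def build_emotion_model(num_classes=7):
    """Stacked convolution/pooling blocks followed by fully connected layers."""
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D(pool_size=(2, 2)),
        Dropout(0.25),
        Conv2D(128, (3, 3), activation="relu"),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(128, (3, 3), activation="relu"),
        MaxPooling2D(pool_size=(2, 2)),
        Dropout(0.25),
        Flatten(),
        Dense(1024, activation="relu"),
        Dropout(0.5),
        Dense(num_classes, activation="softmax"),  # one score per emotion
    ])
    return model

model = build_emotion_model()
model.load_weights("emotion_model_weights.h5")  # assumed pre-trained weights file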
C. Techniques
• Image Data Generator: The ImageDataGenerator class in Keras is a powerful tool for data augmentation and preprocessing. It allows us to generate batches of augmented image data on the fly, which helps improve the performance and generalization of our model.
• Face Cascade Classifier: This is a technique used for face detection in images and videos. The code initializes a face cascade classifier using the "haarcascade_frontalface_default.xml" file; this classifier is used to detect faces in the video frames.
The "VideoCamera" class is responsible for capturing video frames, performing emotion recognition, and recommending music. It uses the face cascade classifier to detect faces, extracts the facial region of interest (ROI), and feeds it to the emotion model for prediction. It also retrieves the corresponding music recommendations from CSV files based on the detected emotion. Both techniques are sketched below.
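For the augmentation step, a typical ImageDataGenerator setup might look like the following; the directory layout and transform parameters are assumptions:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# On-the-fly augmentation: each epoch sees randomly transformed variants
# of the training images, which improves generalization.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)
train_generator = train_datagen.flow_from_directory(
    "data/train",            # assumed layout: one subfolder per emotion class
    target_size=(48, 48),
    color_mode="grayscale",
    class_mode="categorical",
    batch_size=64,
)
```

And a condensed sketch of the VideoCamera logic described above, assuming the pre-trained model from Section III-B and one CSV of recommended songs per emotion (the songs/ path is hypothetical):

```python
import cv2
import numpy as np
import pandas as pd

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

class VideoCamera:
    def __init__(self, emotion_model):
        self.video = cv2.VideoCapture(0)  # default webcam
        self.model = emotion_model
        # OpenCV ships the Haar cascade XML files with the library.
        self.face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        )

    def get_frame(self):
        """Detect a face, predict its emotion, and look up matching songs."""
        ok, frame = self.video.read()
        if not ok:
            return None, None
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = self.face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            # Extract the facial ROI and resize it to the model's input shape.
            roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
            pred = self.model.predict(roi.reshape(1, 48, 48, 1))
            emotion = EMOTIONS[int(np.argmax(pred))]
            # Assumed layout: one CSV of song recommendations per emotion.
            songs = pd.read_csv(f"songs/{emotion}.csv")
            return emotion, songs.head(10)
        return None, None
```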
IV. CONCLUSION
The Face Melody Mood-Aware Music Recommendation System with Facial Recognition leverages cutting-edge technology to enhance the music listening experience. By analyzing users' facial expressions and moods in real time, it delivers personalized music recommendations that align with their emotional state, creating a more immersive and enjoyable musical journey. The system also includes a voice assistant.